    Array Designer: automated optimized array design for functional near-infrared spectroscopy

    The position of each source and detector "optode" on the scalp, and their relative separations, determine the sensitivity of each functional near-infrared spectroscopy (fNIRS) channel to the underlying cortex. As a result, selecting appropriate scalp locations for the available sources and detectors is critical to every fNIRS experiment. At present, it is standard practice for the user to undertake this task manually: to select what they believe are the best locations on the scalp to place their optodes so as to sample a given cortical region-of-interest (ROI). This process is difficult, time-consuming, and highly subjective. Here, we propose a tool, Array Designer, that is able to automatically design optimized fNIRS arrays given a user-defined ROI and certain features of the available fNIRS device. Critically, the Array Designer methodology is generalizable and will be applicable to almost any subject population or fNIRS device. We describe and validate the algorithmic methodology that underpins Array Designer by running multiple simulations of array design problems in a realistic anatomical model. We believe that Array Designer has the potential to end the need for manual array design, and in doing so save researchers time, improve fNIRS data quality, and promote standardization across the field.
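    The paper's own optimization routine is not reproduced here, but the task it automates can be sketched as a simple greedy search: repeatedly place the source or detector whose addition most increases the summed sensitivity of viable channels to the ROI, subject to minimum and maximum source-detector separations. Everything below (candidate positions, distances, sensitivity values, separation limits, and optode counts) is an illustrative assumption, not data or logic from Array Designer.

        # Illustrative greedy sketch only -- not the published Array Designer
        # algorithm. Assumed inputs: candidate scalp positions, their pairwise
        # distances, and the ROI sensitivity of each candidate channel.
        import itertools
        import numpy as np

        rng = np.random.default_rng(0)
        n_pos = 40                                    # hypothetical candidate scalp positions
        dist = rng.uniform(5.0, 120.0, (n_pos, n_pos))
        dist = (dist + dist.T) / 2                    # symmetric distances in mm
        np.fill_diagonal(dist, 0.0)
        roi_sens = rng.random((n_pos, n_pos))         # ROI sensitivity of channel (source, detector)

        MIN_SEP, MAX_SEP = 20.0, 45.0                 # assumed viable channel separations (mm)
        N_SRC, N_DET = 4, 4                           # optodes available on the device

        def roi_coverage(srcs, dets):
            """Sum ROI sensitivity over all viable source-detector channels."""
            return sum(roi_sens[s, d]
                       for s, d in itertools.product(srcs, dets)
                       if MIN_SEP <= dist[s, d] <= MAX_SEP)

        srcs, dets, remaining = [], [], set(range(n_pos))
        while len(srcs) < N_SRC or len(dets) < N_DET:
            candidates = []
            for p in remaining:
                if len(srcs) < N_SRC:
                    candidates.append((roi_coverage(srcs + [p], dets), p, "src"))
                if len(dets) < N_DET:
                    candidates.append((roi_coverage(srcs, dets + [p]), p, "det"))
            _, pos, kind = max(candidates)            # greedy: best marginal gain in ROI coverage
            (srcs if kind == "src" else dets).append(pos)
            remaining.remove(pos)

        print("sources:", srcs, "detectors:", dets)

    A real design procedure would also need anatomically realistic sensitivity maps and channel-viability rules; the greedy loop above only conveys the flavour of automating the placement decision.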

    Orbital Shrinking: A New Tool for Hybrid MIP/CP Methods

    Orbital shrinking is a newly developed technique in the MIP community to deal with symmetry issues, which is based on aggregation rather than on symmetry breaking. In a recent work, a hybrid MIP/CP scheme based on orbital shrinking was developed for the multi-activity shift scheduling problem, showing significant improvements over previous pure MIP approaches. In the present paper we show that the scheme above can be extended to a general framework for solving arbitrary symmetric MIP instances. This framework naturally provides a new way for devising hybrid MIP/CP decompositions. Finally, we specialize the above framework to the multiple knapsack problem. Computational results show that the resulting method can be orders of magnitude faster than pure MIP approaches on hard symmetric instances.
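    As a concrete illustration of the idea (a simplified sketch of our own, not the paper's exact development): for a multiple knapsack problem with m identical knapsacks of capacity C, the knapsack index is the source of symmetry, and orbital shrinking aggregates it away.

        % Symmetric formulation: x_{ij} = 1 if item i is packed into knapsack j
        \max \sum_i p_i \sum_{j=1}^{m} x_{ij}
        \quad \text{s.t.} \quad
        \sum_i w_i x_{ij} \le C \;\; \forall j, \qquad
        \sum_{j=1}^{m} x_{ij} \le 1 \;\; \forall i, \qquad
        x_{ij} \in \{0,1\}.

        % Orbital shrinking: aggregate over the symmetric index j via y_i = \sum_j x_{ij}
        \max \sum_i p_i y_i
        \quad \text{s.t.} \quad
        \sum_i w_i y_i \le m\,C, \qquad
        y_i \in \{0,1\}.

    The aggregated model is a relaxation of the original one; in a hybrid scheme of the kind described above, a slave check (for instance, a CP bin-packing feasibility test) would decide whether a selected item set can actually be split across the m knapsacks.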

    Exact and heuristic approaches for directional sensor control

    An exploratory computational analysis of dual degeneracy in mixed-integer programming

    Dual degeneracy, i.e., the presence of multiple optimal bases to a linear programming (LP) problem, heavily affects the solution process of mixed integer programming (MIP) solvers. Different optimal bases lead to different cuts being generated, different branching decisions being taken, and different solutions being found by primal heuristics. Nevertheless, only a few methods have been published that either avoid or exploit dual degeneracy. The aim of the present paper is to conduct a thorough computational study on the presence of dual degeneracy for the instances of well-known public MIP instance collections. How many instances are affected by dual degeneracy? How degenerate are the affected models? How does branching affect degeneracy: does it increase or decrease as variables are fixed? Can we identify different types of degenerate MIPs? As a tool to answer these questions, we introduce a new measure for dual degeneracy: the variable-constraint ratio of the optimal face. It provides an estimate for the likelihood that a basic variable can be pivoted out of the basis. Furthermore, we study how the so-called cloud intervals (the projections of the optimal face of the LP relaxations onto the individual variables) evolve during tree search and the implications for reducing the set of branching candidates.
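    The paper's variable-constraint ratio is not reconstructed here, but the underlying notion is easy to make concrete: an LP is dual degenerate exactly when more than one basis attains the optimal value. The brute-force enumeration below, on a hypothetical two-constraint LP, is only a toy illustration of that definition and does not scale beyond tiny instances.

        # Toy check for dual degeneracy: count the optimal bases of a small
        # standard-form LP  max c^T x  s.t.  A x = b, x >= 0  by enumeration.
        import itertools
        import numpy as np

        # max x1 + x2  s.t.  x1 + x2 <= 1,  x1 <= 1   (slacks x3, x4 added)
        A = np.array([[1.0, 1.0, 1.0, 0.0],
                      [1.0, 0.0, 0.0, 1.0]])
        b = np.array([1.0, 1.0])
        c = np.array([1.0, 1.0, 0.0, 0.0])

        m, n = A.shape
        best_val, optimal_bases = -np.inf, []
        for basis in itertools.combinations(range(n), m):
            cols = list(basis)
            B = A[:, cols]
            if abs(np.linalg.det(B)) < 1e-9:
                continue                         # singular: not a basis
            x_b = np.linalg.solve(B, b)
            if np.any(x_b < -1e-9):
                continue                         # basic solution is infeasible
            val = c[cols] @ x_b
            if val > best_val + 1e-9:
                best_val, optimal_bases = val, [basis]
            elif abs(val - best_val) <= 1e-9:
                optimal_bases.append(basis)

        print("optimal value:", best_val)
        print("optimal bases:", optimal_bases)   # more than one => dual degenerate

    On this example the optimal value 1 is attained by several bases, which is exactly the situation the paper's measure tries to quantify on realistic instances without enumerating bases.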

    Implementing Automatic Benders Decomposition in a Modern MIP Solver

    We describe the automatic Benders decomposition implemented in the commercial solver IBM CPLEX. We propose several improvements to the state of the art along two lines: making the method numerically robust and able to handle the general case, and improving its efficiency on models amenable to decomposition. For the former, we deal with unboundedness, failures in generating cuts, and the scaling of the artificial variable representing the objective. For the latter, we propose a new technique to handle so-called generalized bound constraints and we use different types of normalization conditions in the cut-generating LPs. We present computational experiments aimed at assessing the importance of the various enhancements. In particular, on our test bed of models amenable to decomposition, our implementation is approximately 5 times faster than CPLEX default branch-and-cut. A remarkable result is that, on the same test bed, default branch-and-cut is faster than a Benders decomposition that does not implement our improvements.
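    For readers unfamiliar with the setting, the textbook decomposition that such a solver automates looks roughly as follows (a standard sketch, not CPLEX's actual implementation): for a MIP min{ c^T y + d^T x : By + Dx >= b, y integer, x >= 0 } with continuous x, projecting x out leaves a master problem over y plus a single artificial variable eta.

        \min_{y \in \mathbb{Z}^p,\; \eta} \; c^\top y + \eta
        \quad \text{s.t.} \quad
        \eta \ge u_k^\top (b - B y) \;\; \text{for every extreme point } u_k, \qquad
        0 \ge r_j^\top (b - B y) \;\; \text{for every extreme ray } r_j

        \text{of the dual polyhedron } \{\, u \ge 0 : D^\top u \le d \,\}.

    Both cut families are separated on the fly by solving a cut-generating LP over that dual polyhedron; the normalization conditions mentioned in the abstract are extra constraints imposed on this cut-generating LP so that it returns a bounded, well-scaled violated cut.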

    Just MIP it!

    Modern Mixed-Integer Programming (MIP) solvers exploit a rich arsenal of tools to attack hard problems. It is widely accepted by the OR community that the solution of very hard MIPs can benefit from the solution of a series of time-consuming auxiliary Linear Programs (LPs) intended to enhance the performance of the overall MIP solver. For example, auxiliary LPs may be solved to generate powerful disjunctive cuts, or to implement a strong branching policy. Also well established is the fact that finding good-quality heuristic MIP solutions often requires a computing time comparable to that needed to solve the LP relaxations. So it makes sense to think of a new generation of MIP solvers in which auxiliary MIPs (as opposed to LPs) are heuristically solved on the fly, with the aim of bringing MIP technology to bear inside the MIP solver itself. This leads to the idea of "translating into a MIP model" (MIPping) some crucial decisions to be taken within a MIP algorithm: How to cut? How to improve the incumbent solution? Is the current node dominated? In this paper we survey a number of successful applications of this approach.
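    A classical instance of "MIPping" the incumbent-improvement step, and one covered by this line of work, is local branching: given an incumbent x* of a 0-1 MIP with support S = { j : x*_j = 1 }, an auxiliary MIP explores a Hamming-ball neighbourhood of x* by adding a single linear constraint to the original model and solving it heuristically under a tight time or node limit.

        \Delta(x, x^\ast) \;=\; \sum_{j \in S} (1 - x_j) \;+\; \sum_{j \notin S} x_j \;\le\; k

    Any solution of this auxiliary MIP that improves on x* becomes the new incumbent; analogous auxiliary MIPs can be written for the other decisions mentioned above, such as cut separation and node-dominance tests.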

    Learning Decision Trees with Flexible Constraints and Objectives Using Integer Optimization

    We encode the problem of learning the optimal decision tree of a given depth as an integer optimization problem. We show experimentally that our method (DTIP) can be used to learn good trees up to depth 5 from data sets of size up to 1000. In addition to being efficient, our new formulation allows for considerable flexibility. Experiments show that we can use trees learned by any existing decision tree algorithm as starting solutions and improve them using DTIP. Moreover, the proposed formulation allows us to easily create decision trees with optimization objectives other than accuracy and error, and constraints can be added explicitly during the tree construction phase. We show how this flexibility can be used to learn discrimination-aware classification trees, to improve learning from imbalanced data, and to learn trees that minimize false positive/negative errors.
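    To make the encoding idea tangible, here is a generic sketch of a depth-limited decision-tree MIP for binary features a_{ij} and labels y_i (our own simplified illustration, not the DTIP formulation itself): f_{tj} selects the feature tested at internal node t, z_{i\ell} routes example i to leaf \ell, g_{\ell k} assigns class k to leaf \ell, and e_i flags a misclassification.

        \begin{aligned}
        \min\ & \textstyle\sum_i e_i \\
        \text{s.t.}\ & \textstyle\sum_j f_{tj} = 1 \;\; \forall t, \qquad
                       \textstyle\sum_\ell z_{i\ell} = 1 \;\; \forall i, \qquad
                       \textstyle\sum_k g_{\ell k} = 1 \;\; \forall \ell, \\
        & z_{i\ell} \le 1 - \textstyle\sum_j a_{ij} f_{tj} \;\; \text{if leaf } \ell \text{ lies in the left subtree of node } t, \\
        & z_{i\ell} \le \textstyle\sum_j a_{ij} f_{tj} \;\; \text{if leaf } \ell \text{ lies in the right subtree of node } t, \\
        & e_i \ge z_{i\ell} + g_{\ell k} - 1 \;\; \forall i,\ \ell,\ k \ne y_i, \qquad
          f, z, g, e \in \{0,1\}.
        \end{aligned}

    The flexibility described in the abstract corresponds to swapping the objective (for example, weighting false positives and false negatives differently) or adding linear side constraints, while warm starts from an existing tree amount to supplying a feasible (f, z, g, e) as an initial solution.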